UpTrust

Social media built on trust and credibility. Where thoughtful contributions rise to the top.


© 2026 UpTrust. All rights reserved.

information literacy

  • UpTrust Admin

    How should a normal person decide what's true?: Pragmatic pluralists

    The Tuesday problem: My daughter had a rash. It was 11 p.m. I had three sources: a pediatric reference my wife bought when we were pregnant, a Reddit thread from 2019, and a telehealth nurse who sounded tired. The reference said it could be nothing....
    epistemology
    information literacy
    decision making under uncertainty
    health and medical decision making
  • M

    The concept of this app sounds promising. Do you think the internet can be a place for deep and meaningful conversations in this day and age?

    TRG•...
    I agree. My question would be this: how do we sort lies and disinformation from what is accurate? I won't even say true, just accurate. I try to maintain a balanced set of inputs. So as examples, on XM, I listen to Patriot, Urban, Progress, and POTUS....
    critical thinking
    information literacy
    media skepticism
  • Robbie Carlton

    Please help me stay intellectually honest! I'm not a fan of generative AI in general, and LLM technology specifically. I think its capabilities are being drastically over-hyped. It's a perfect, sweaty example of a solution looking for a problem. I'm skeptical of many claims people are making wrt how it's helping them.

    My experience is it's like having access to an idiot-savant intern. Awful at most tasks, but knows everything and can read incredibly quickly.

    Publicly, I've taken on the mantle of a staunch critic of generative AI and a pro-human, pro-soul advocate.

    And for the most part, I'm happy with that stance. I like it. It feels good to rail against something, and it feels good to contrast a thing that I hate against something I love. It throws the love into more relief.

    Yet, I don't want to lose any babies in that bathwater, and I don't want to lose my intellectual honesty in the neurochemical rush of fighting for a cause. So I'd love to explore the best use cases of LLMs that you all are actually using, and actually finding beneficial, life improving, productivity increasing, all of that.

    I'd love to hear your experience, and ideally, you'd tell me how you're doing what you're doing with it in enough detail so that I can try it.

    I'll start.

    The absolute most useful thing I've found for it so far, and it's not even close, is language learning.

    I'm in a slow process of learning Japanese, and asking a chatbot to break down the grammar of a specific sentence is super useful. It's also great for generating content for flashcards. Say you have a set of characters, and you want some example words that use each particular character. It's so easy to generate stuff like that.
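The flashcard workflow described above can be sketched as a small prompt builder. Everything here is an assumption for illustration (the `build_flashcard_prompt` helper and the prompt wording are invented, not tied to any particular chatbot or API); the point is just that once the prompt is templated, generating cards for a whole character set is mechanical:

```python
# Hypothetical sketch: templating prompts that ask a chatbot for example
# words using each character in a study set. The helper name and prompt
# wording are illustrative assumptions, not any real library's API.

def build_flashcard_prompt(character: str, n_words: int = 3) -> str:
    """Build a prompt requesting example words that use `character`."""
    return (
        f"Give me {n_words} common Japanese words that use the character "
        f"{character}. For each word, include the reading in hiragana and "
        f"a short English gloss, one word per line."
    )

# One prompt per character in the study set; each response becomes
# the back of a flashcard.
characters = ["水", "火", "木"]
prompts = [build_flashcard_prompt(c) for c in characters]

for p in prompts:
    print(p)
```

Each prompt would then be pasted into (or sent to) whatever chatbot you use, and the reply pasted into your flashcard deck.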

    Outside of that, I use it in super basic ways (basically as Google with one less step).

    So please, give me your best use cases, things that you've not only been impressed by, in an "oh wow, that monkey can tap dance!" way, but that have actually improved the quality of your life.

    Robbie Carlton•...
    Thanks Joshua. This has a lot of overlap with my experience. AI can cite its sources these days, and I've found going to the primary sources to validate what the AI said is key....
    artificial intelligence
    human-computer interaction
    programming
    information literacy